
    Partial Covering Arrays: Algorithms and Asymptotics

    A covering array $\mathsf{CA}(N;t,k,v)$ is an $N\times k$ array with entries in $\{1, 2, \ldots, v\}$, for which every $N\times t$ subarray contains each $t$-tuple of $\{1, 2, \ldots, v\}^t$ among its rows. Covering arrays find application in interaction testing, including software and hardware testing, advanced materials development, and biological systems. A central question is to determine or bound $\mathsf{CAN}(t,k,v)$, the minimum number $N$ of rows of a $\mathsf{CA}(N;t,k,v)$. The well-known bound $\mathsf{CAN}(t,k,v)=O((t-1)v^t\log k)$ is not too far from being asymptotically optimal. Sensible relaxations of the covering requirement arise when (1) the set $\{1, 2, \ldots, v\}^t$ need only be contained among the rows of at least $(1-\epsilon)\binom{k}{t}$ of the $N\times t$ subarrays and (2) the rows of every $N\times t$ subarray need only contain a (large) subset of $\{1, 2, \ldots, v\}^t$. In this paper, using probabilistic methods, significant improvements on the covering array upper bound are established for both relaxations, and for the conjunction of the two. In each case, a randomized algorithm constructs such arrays in expected polynomial time.
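
    To make the covering condition concrete, the following sketch (not taken from the paper; the function name and interface are assumptions) checks the CA(N;t,k,v) property by brute force over all choices of t columns.

        from itertools import combinations, product

        def is_covering_array(A, t, v):
            """Check the CA(N;t,k,v) property by brute force.

            A is an N x k list of rows with entries in {1, ..., v}.  Returns
            True iff every choice of t columns exhibits every t-tuple over
            {1, ..., v} among its rows.
            """
            if not A:
                return False
            k = len(A[0])
            required = set(product(range(1, v + 1), repeat=t))
            for cols in combinations(range(k), t):
                seen = {tuple(row[c] for c in cols) for row in A}
                if not required <= seen:
                    return False
            return True

        # Example: a CA(4;2,3,2) -- four rows suffice to cover all pairs
        # in every pair of columns.
        A = [[1, 1, 1],
             [1, 2, 2],
             [2, 1, 2],
             [2, 2, 1]]
        assert is_covering_array(A, t=2, v=2)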

    Perfect Hash Families: The Generalization to Higher Indices

    Perfect hash families are often represented as combinatorial arrays encoding partitions of k items into v classes, so that every t or fewer of the items are completely separated by at least a specified number of the chosen partitions. This specified number is the index of the hash family. The case when each t-set must be separated at least once has been extensively researched; such families arise in diverse applications, both directly and as fundamental ingredients in a column replacement strategy for a variety of combinatorial arrays. In this paper, construction techniques and algorithmic methods for perfect hash families are surveyed, in order to explore extensions to the situation when each t-set must be separated by more than one partition.
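
    As an illustration of the index notion (a sketch under assumed conventions, not code from the survey), the function below computes the largest lambda such that every t-set of items is completely separated by at least lambda of the given partitions.

        from itertools import combinations

        def phf_index(partitions, k, t):
            """Largest lambda such that every t-subset of the k items is put
            into t pairwise distinct classes by at least lambda of the given
            partitions.  Each partition is a length-k list: item -> class.
            """
            best = None
            for items in combinations(range(k), t):
                count = sum(1 for p in partitions
                            if len({p[i] for i in items}) == t)
                best = count if best is None else min(best, count)
            return best or 0

        # Three partitions of k=4 items into v=3 classes; every 3-set is
        # separated at least once, so the family has index 1.
        parts = [[0, 1, 2, 0],
                 [0, 1, 0, 2],
                 [0, 0, 1, 2]]
        print(phf_index(parts, k=4, t=3))  # 1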

    Lower bounds on multiple sequence alignment using exact 3-way alignment

    Background: Multiple sequence alignment is fundamental. Exponential growth in computation time appears to be inevitable when an optimal alignment is required for many sequences. Exact costs of optimum alignments are therefore rarely computed. Consequently, much effort has been invested in alignment algorithms that are heuristic or that explore a restricted class of solutions. These give an upper bound on the alignment cost, but it is equally important to determine the quality of the solution obtained. In the absence of an optimal alignment with which to compare, lower bounds may be calculated to assess the quality of the alignment. As more effort is invested in improving upper bounds (alignment algorithms), it is therefore important to improve lower bounds as well. Although numerous cost metrics can be used to determine the quality of an alignment, many are based on sum-of-pairs (SP) measures and their generalizations. Results: Two standard and two new methods are considered for using exact 2-way and 3-way alignments to compute lower bounds on total SP alignment cost; one new method fares well with respect to accuracy, while the other reduces the computation time. The first employs exhaustive computation of exact 3-way alignments, while the second employs an efficient heuristic to compute a much smaller number of exact 3-way alignments. Calculating all 3-way alignments exactly and computing their average improves lower bounds on SP cost in v-way alignments. However, judicious selection of a subset of all 3-way alignments can yield a further improvement with minimal additional effort. On the other hand, a simple heuristic to select a random subset of 3-way alignments (a random packing) yields accuracy comparable to averaging all 3-way alignments, with substantially less computational effort. Conclusion: Calculation of lower bounds on SP cost (and thus of the quality of an alignment) can be improved by employing a mixture of 3-way and 2-way alignments.
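
    For contrast with the bounds above, here is a minimal sketch (unit mismatch and gap costs are assumed; this is neither the cost model nor an algorithm of the paper) of the standard 2-way lower bound: the optimal SP cost of any multiple alignment is at least the sum of optimal pairwise alignment costs, since projecting a multiple alignment onto a pair yields a feasible pairwise alignment.

        from itertools import combinations

        def pairwise_cost(s, t):
            """Optimal global alignment cost of s and t under unit mismatch
            and gap costs (standard dynamic programming, rolling array)."""
            n, m = len(s), len(t)
            dp = list(range(m + 1))
            for i in range(1, n + 1):
                prev, dp[0] = dp[0], i
                for j in range(1, m + 1):
                    cur = dp[j]
                    dp[j] = min(prev + (s[i - 1] != t[j - 1]),  # (mis)match
                                dp[j] + 1,                       # gap in t
                                dp[j - 1] + 1)                   # gap in s
                    prev = cur
            return dp[m]

        def sp_lower_bound(seqs):
            """2-way lower bound on the SP cost of any v-way alignment."""
            return sum(pairwise_cost(a, b) for a, b in combinations(seqs, 2))

        print(sp_lower_bound(["ACGT", "AGT", "ACT"]))  # 3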

    Minimal Obstructions for Partial Representations of Interval Graphs

    Interval graphs are intersection graphs of closed intervals. A generalization of recognition called partial representation extension was introduced recently. The input gives an interval graph with a partial representation specifying some pre-drawn intervals. We ask whether the remaining intervals can be added to create an extending representation. Two linear-time algorithms are known for solving this problem. In this paper, we characterize the minimal obstructions which make partial representations non-extendible. This generalizes Lekkerkerker and Boland's characterization of the minimal forbidden induced subgraphs of interval graphs. Each minimal obstruction consists of a forbidden induced subgraph together with at most four pre-drawn intervals. A Helly-type result follows: A partial representation is extendible if and only if every quadruple of pre-drawn intervals is extendible by itself. Our characterization leads to a linear-time certifying algorithm for partial representation extension.
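
    As an elementary necessary check on a partial representation (much weaker than the quadruple criterion above, and not the paper's extension or certification algorithm; the names are hypothetical), the pre-drawn intervals must already respect adjacency:

        def predrawn_consistent(adj, predrawn):
            """Necessary condition for extendibility: pre-drawn closed
            intervals intersect iff their vertices are adjacent.

            adj maps a vertex to its set of neighbours; predrawn maps a
            vertex to its fixed interval (lo, hi).
            """
            verts = list(predrawn)
            for i, u in enumerate(verts):
                for w in verts[i + 1:]:
                    lo_u, hi_u = predrawn[u]
                    lo_w, hi_w = predrawn[w]
                    intersect = lo_u <= hi_w and lo_w <= hi_u
                    if intersect != (w in adj[u]):
                        return False
            return True

        # A path a-b-c with b and c pre-drawn as disjoint intervals is
        # immediately non-extendible, since b and c are adjacent.
        adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
        print(predrawn_consistent(adj, {"b": (0, 1), "c": (2, 3)}))  # False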

    An Efficient Local Search for Partial Latin Square Extension Problem

    A partial Latin square (PLS) is a partial assignment of n symbols to an n×n grid such that, in each row and in each column, each symbol appears at most once. The partial Latin square extension problem is an NP-hard problem that asks for a largest extension of a given PLS. In this paper we propose an efficient local search for this problem. We focus on a local search whose neighborhood is defined by (p,q)-swap, i.e., removing exactly p symbols and then assigning symbols to at most q empty cells. For p in {1,2,3}, our neighborhood search algorithm finds an improved solution or concludes that no such solution exists in O(n^{p+1}) time. We also propose a novel swap operation, Trellis-swap, which is a generalization of (1,q)-swap and (2,q)-swap. Our Trellis-neighborhood search algorithm does the same in O(n^{3.5}) time. Using these neighborhood search algorithms, we design a prototype iterated local search algorithm and show its effectiveness in comparison with state-of-the-art optimization solvers such as IBM ILOG CPLEX and LocalSolver.
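
    As a baseline for the swap-based local search (a naive sketch only, not the (p,q)-swap or Trellis-swap procedures; the helper names are assumptions), the code below checks feasibility of a single assignment and greedily extends a PLS.

        def can_assign(grid, r, c, s):
            """Symbol s fits in empty cell (r, c) iff it appears neither in
            row r nor in column c."""
            n = len(grid)
            return (grid[r][c] is None
                    and all(grid[r][j] != s for j in range(n))
                    and all(grid[i][c] != s for i in range(n)))

        def greedy_extend(grid):
            """Fill empty cells greedily with any feasible symbol and return
            the number of newly assigned cells (a baseline an iterated local
            search would then try to improve)."""
            n, added = len(grid), 0
            for r in range(n):
                for c in range(n):
                    if grid[r][c] is None:
                        for s in range(1, n + 1):
                            if can_assign(grid, r, c, s):
                                grid[r][c] = s
                                added += 1
                                break
            return added

        pls = [[1, None, None],
               [None, 3, None],
               [None, None, 2]]
        print(greedy_extend(pls))  # 6 -- this instance extends to a full Latin square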

    Making Code Voting Secure against Insider Threats using Unconditionally Secure MIX Schemes and Human PSMT Protocols

    Code voting was introduced by Chaum as a solution for using a possibly malware-infected device to cast a vote in an electronic voting application. Chaum's work on code voting assumed that voting codes are physically delivered to voters using the mail system, implicitly requiring trust in the mail system. This is not necessarily a valid assumption, especially if the mail system cannot be trusted: when the mail system conspires with the recipient of the cast ballots, privacy is broken. It is clear to the public that, when it comes to privacy, computers and "secure" communication over the Internet cannot fully be trusted. This emphasizes the importance of (1) unconditional security for network communication and (2) reduced reliance on untrusted computers. In this paper we explore how to remove the mail system trust assumption in code voting. We use PSMT protocols (SCN 2012) in which, with the help of visual aids, humans can carry out mod 10 addition correctly with a 99% degree of accuracy. We introduce an unconditionally secure MIX based on the combinatorics of set systems. Given that the end users of our proposed voting scheme are humans, we cannot use classical secure multi-party computation protocols. Our solutions are for both single- and multi-seat elections, achieving: (i) an anonymous and perfectly secure communication network, secure against a t-bounded passive adversary, used to deliver voting codes, and (ii) an end step of the protocol that can be handled by a human to evade the threat of malware. We do not focus on active adversaries.
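
    To illustrate only the human-executable arithmetic referred to above (digit-wise mod 10 addition; this sketch is not the PSMT or MIX construction of the paper, and the names and the 6-digit code are hypothetical):

        import secrets

        def add_mod10(code, pad):
            """Digit-wise addition mod 10: blind a numeric voting code with a
            one-time pad of the same length."""
            return "".join(str((int(a) + int(b)) % 10) for a, b in zip(code, pad))

        def sub_mod10(blinded, pad):
            """Inverse digit-wise operation, removing the pad."""
            return "".join(str((int(a) - int(b)) % 10) for a, b in zip(blinded, pad))

        pad = "".join(str(secrets.randbelow(10)) for _ in range(6))
        vote_code = "428175"                 # hypothetical 6-digit voting code
        blinded = add_mod10(vote_code, pad)  # what the voter would compute by hand
        assert sub_mod10(blinded, pad) == vote_code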

    Relative blocking in posets

    Poset-theoretic generalizations of set-theoretic committee constructions are presented. The structure of the corresponding subposets is described. Sequences of irreducible fractions associated with the principal order ideals of finite bounded posets are considered, and those related to the Boolean lattices are explored; it is shown that such sequences inherit all the familiar properties of the Farey sequences.
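
    Since the paper relates these sequences to the Farey sequences, the standard next-term recurrence for the Farey sequence F_n is sketched below (textbook material, not the paper's construction).

        from fractions import Fraction

        def farey(n):
            """Farey sequence F_n: all irreducible fractions in [0, 1] with
            denominator at most n, in increasing order, generated by the
            standard next-term recurrence."""
            a, b, c, d = 0, 1, 1, n
            seq = [Fraction(a, b)]
            while c <= n:
                k = (n + b) // d
                a, b, c, d = c, d, k * c - a, k * d - b
                seq.append(Fraction(a, b))
            return seq

        print([str(f) for f in farey(5)])
        # ['0', '1/5', '1/4', '1/3', '2/5', '1/2', '3/5', '2/3', '3/4', '4/5', '1']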

    Finding reliable subgraphs from large probabilistic graphs

    Reliable subgraphs can be used, for example, to find and rank nontrivial links between given vertices, to concisely visualize large graphs, or to reduce the size of input for computationally demanding graph algorithms. We propose two new heuristics for solving the most reliable subgraph extraction problem on large, undirected probabilistic graphs. Such a problem is specified by a probabilistic graph G subject to random edge failures, a set of terminal vertices, and an integer K. The objective is to remove K edges from G such that the probability of connecting the terminals in the remaining subgraph is maximized. We provide some technical details and a rough analysis of the proposed algorithms. The practical performance of the methods is evaluated on real probabilistic graphs from the biological domain. The results indicate that the methods scale much better to large input graphs, both computationally and in terms of the quality of the result.
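
    The objective being maximized can be estimated straightforwardly by Monte Carlo sampling; the sketch below (an illustration of the objective only, not the paper's heuristics; the function names are assumptions) estimates the probability that the terminals remain connected under independent edge failures.

        import random

        def terminals_connected(edges, terminals):
            """Union-find check that all terminals fall into one component of
            the graph formed by the surviving edges."""
            parent = {}
            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for u, v in edges:
                parent[find(u)] = find(v)
            return len({find(t) for t in terminals}) == 1

        def estimate_reliability(prob_edges, terminals, samples=10000):
            """Monte Carlo estimate of the probability that the terminals are
            mutually connected, for edges (u, v, p) surviving independently
            with probability p."""
            hits = 0
            for _ in range(samples):
                survivors = [(u, v) for u, v, p in prob_edges if random.random() < p]
                hits += terminals_connected(survivors, terminals)
            return hits / samples

        graph = [("a", "b", 0.9), ("b", "c", 0.8), ("a", "c", 0.5)]
        print(estimate_reliability(graph, {"a", "c"}))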

    Extension of Some Edge Graph Problems: Standard and Parameterized Complexity

    The PDF is an unpublished author version. We consider extension variants of some edge optimization problems in graphs, including the classical Edge Cover, Matching, and Edge Dominating Set problems. Given a graph G=(V,E) and an edge set U⊆E, it is asked whether there exists an inclusion-wise minimal (resp., maximal) feasible solution E′ which satisfies a given property, for instance, being an edge dominating set (resp., a matching), and containing the forced edge set U (resp., avoiding any edges from the forbidden edge set E∖U). We present hardness results for these problems, for restricted instances such as bipartite or planar graphs. We counterbalance these negative results with parameterized complexity results. We also consider the price of extension, a natural optimization variant of extension problems, leading to some approximation results.
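
    For the edge dominating set variant, checking a candidate extension is straightforward even though deciding its existence is hard; the sketch below (an assumed formalization for illustration, not the paper's algorithms) verifies that a set E' dominates every edge, contains the forced set U, and is inclusion-wise minimal.

        def dominates(E, edges):
            """E is an edge dominating set iff every edge of the graph shares
            an endpoint with some edge of E."""
            covered = {v for e in E for v in e}
            return all(u in covered or w in covered for u, w in edges)

        def is_minimal_extension(E, U, edges):
            """E is a feasible extension iff it contains every forced edge of
            U, dominates all edges, and no single edge can be removed from E
            without losing domination (inclusion-wise minimality)."""
            E, U = set(E), set(U)
            if not U <= E or not dominates(E, edges):
                return False
            return all(not dominates(E - {e}, edges) for e in E)

        # 4-cycle a-b-c-d-a with forced edge (a, b):
        edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
        print(is_minimal_extension({("a", "b"), ("c", "d")}, {("a", "b")}, edges))  # True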

    The power of propagation: when GAC is enough

    Considerable effort in constraint programming has focused on the development of efficient propagators for individual constraints. In this paper, we consider the combined power of such propagators when applied to collections of more than one constraint. In particular we identify classes of constraint problems where such propagators can decide the existence of a solution on their own, without the need for any additional search. Sporadic examples of such classes have previously been identified, including classes based on restricting the structure of the problem, restricting the constraint types, and some hybrid examples. However, there has previously been no unifying approach which characterises all of these classes: structural, language-based and hybrid. In this paper we develop such a unifying approach and embed all the known classes into a common framework. We then use this framework to identify a further class of problems that can be solved by propagation alone.
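
    As a concrete special case of the propagation discussed above, the sketch below runs binary arc consistency (AC-3); on instances from the identified classes, propagation of this kind either wipes out a domain (no solution) or narrows the domains enough that no search is needed. The sketch is generic, not the paper's framework, and its names are hypothetical.

        from collections import deque

        def ac3(domains, constraints):
            """Enforce arc consistency for binary constraints.

            domains: dict var -> set of values (reduced in place).
            constraints: dict (x, y) -> predicate on allowed value pairs.
            Returns False on a domain wipe-out, otherwise True.
            """
            arcs = set(constraints) | {(y, x) for (x, y) in constraints}

            def allowed(x, y, vx, vy):
                if (x, y) in constraints:
                    return constraints[(x, y)](vx, vy)
                return constraints[(y, x)](vy, vx)

            queue = deque(arcs)
            while queue:
                x, y = queue.popleft()
                pruned = {vx for vx in domains[x]
                          if not any(allowed(x, y, vx, vy) for vy in domains[y])}
                if pruned:
                    domains[x] -= pruned
                    if not domains[x]:
                        return False  # wipe-out: provably no solution
                    queue.extend((z, x) for (z, w) in arcs if w == x and z != y)
            return True

        # x < y and y < z over {1, 2, 3}: propagation alone fixes x=1, y=2, z=3.
        doms = {"x": {1, 2, 3}, "y": {1, 2, 3}, "z": {1, 2, 3}}
        cons = {("x", "y"): lambda a, b: a < b, ("y", "z"): lambda a, b: a < b}
        print(ac3(doms, cons), doms)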